Automatic extraction of retinal features to assist diagnosis of glaucoma disease
Glaucoma is a group of eye diseases that share common traits such as high eye pressure, damage to the Optic Nerve Head (ONH) and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of diagnosing glaucoma are performed manually by clinicians, who carry out image operations such as changing the contrast and zooming in and out to observe glaucoma-related clinical indications. This type of diagnostic process is time-consuming and subjective. With the advancement of image and vision computing, automating steps in the diagnostic process allows more patients to be screened and early treatment to be provided to prevent any, or further, loss of vision.
The aim of this work is to develop a system called the Glaucoma Detection Framework (GDF), which can automatically determine changes in retinal structures and image-based patterns associated with glaucoma, so as to assist eye clinicians in diagnosing glaucoma in a timely and effective manner. In this work, several major contributions have been made towards the development of the automatic GDF, consisting of the stages of preprocessing, optic disc and cup segmentation, and regional image feature methods for classification between glaucoma and normal images.
Firstly, in the preprocessing step, a retinal area detector based on a superpixel classification model has been developed to automatically determine the true retinal area in a Scanning Laser Ophthalmoscope (SLO) image. The retinal area detector can automatically remove artefacts from the SLO image while preserving computational efficiency and avoiding over-segmentation of the artefacts. Localization of the ONH is one of the important steps in glaucoma analysis. A new weighted feature map approach has been proposed, which enhances the ONH region for accurate localization. For determining vasculature shift, which is one of the indications of glaucoma, we proposed an ONH-cropped-image-based vasculature classification model to segment the vasculature from the ONH cropped image. This model is designed to avoid misidentifying the optic disc boundary and Peripapillary Atrophy (PPA) around the ONH as part of the vasculature area.
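As a rough illustration of the superpixel-classification idea, the sketch below uses fixed grid patches as a crude stand-in for superpixels and a logistic-regression classifier on a synthetic image; the features, image and labels are invented for illustration and do not reproduce the thesis's actual model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def patch_features(image, patch=8):
    """Split an image into fixed-size patches (a crude stand-in for
    superpixels) and compute mean intensity and variance per patch."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            feats.append([p.mean(), p.var()])
            coords.append((y, x))
    return np.array(feats), coords

# Synthetic SLO-like image: bright textured "retina" in the centre,
# dark smooth "eyelash/eyelid" artefact regions at the borders.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.02, (64, 64))               # dark artefact border
img[16:48, 16:48] = rng.normal(0.7, 0.1, (32, 32))  # bright retinal area

X, coords = patch_features(img)
# Ground-truth label per patch: 1 = retinal area, 0 = artefact.
y = np.array([1 if 16 <= cy < 48 and 16 <= cx < 48 else 0
              for cy, cx in coords])

clf = LogisticRegression().fit(X, y)
accuracy = (clf.predict(X) == y).mean()
```

In the real framework, superpixels adapt to image content and the features reflect textural and structural information; the grid-plus-intensity version above only shows the classify-regions-into-retina-vs-artefact pattern.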
Secondly, for automatic determination of the optic disc and optic cup boundaries, a Point Edge Model (PEM), a Weighted Point Edge Model (WPEM) and a Region Classification Model (RCM) have been proposed. The RCM initially determines the optic disc region using the set of feature maps most suitable for region classification, whereas the PEM updates the contour using the force field of the feature maps with a strong edge profile. The combination of PEM and RCM, termed the Point Edge and Region Classification Model (PERCM), significantly increases the accuracy of optic disc segmentation with respect to clinical annotations around the optic disc. The WPEM, on the other hand, determines the force field using the weighted feature maps calculated by the RCM for the optic cup, in order to enhance the optic cup region relative to the rim area in the ONH. The combination of WPEM and RCM, termed the Weighted Point Edge and Region Classification Model (WPERCM), significantly enhances the accuracy of optic cup segmentation.
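A minimal sketch of the force-field contour-update idea follows, with the gradient of a simple edge-strength map as a generic stand-in for the weighted feature-map force fields of the PEM/WPEM; the synthetic "disc" and all parameters are illustrative assumptions:

```python
import numpy as np

def edge_force_field(image):
    # Edge-strength map and its gradient: a force field that pulls
    # contour points towards strong edges (a generic stand-in for the
    # feature-map force fields used by the PEM/WPEM).
    gy, gx = np.gradient(image.astype(float))
    edge = np.hypot(gx, gy)
    fy, fx = np.gradient(edge)
    return fy, fx

def update_contour(points, fy, fx, step=10.0, iters=100):
    # Move each (row, col) contour point along the force sampled at its
    # nearest pixel; the published models weight several feature maps.
    pts = points.astype(float).copy()
    h, w = fy.shape
    for _ in range(iters):
        iy = np.clip(np.round(pts[:, 0]).astype(int), 0, h - 1)
        ix = np.clip(np.round(pts[:, 1]).astype(int), 0, w - 1)
        pts[:, 0] += step * fy[iy, ix]
        pts[:, 1] += step * fx[iy, ix]
    return pts

# Synthetic "optic disc": a smooth bright blob whose strongest edge
# response forms a ring around the blob centre.
yy, xx = np.mgrid[:100, :100]
disc = 100.0 * np.exp(-((yy - 50) ** 2 + (xx - 50) ** 2) / (2 * 15.0 ** 2))

fy, fx = edge_force_field(disc)
# Initial contour: a circle of radius 30, outside the edge ring.
t = np.linspace(0, 2 * np.pi, 36, endpoint=False)
init = np.stack([50 + 30 * np.sin(t), 50 + 30 * np.cos(t)], axis=1)
final = update_contour(init, fy, fx)  # contracts onto the edge ring
```

Running this, the contour points settle near the ring of strongest edge response, which is the core behaviour the point-edge models exploit for disc and cup boundaries.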
Thirdly, this work proposes a Regional Image Features Model (RIFM) which can automatically classify normal and glaucoma images on the basis of regional information. Unlike existing methods, which focus on global feature information only, our approach, after optic disc localization and segmentation, automatically divides an image into five regions: the optic disc or Optic Nerve Head (ONH) area, and the inferior (I), superior (S), nasal (N) and temporal (T) regions. These regions are usually used by clinicians for diagnosing glaucoma through visual observation only. The RIFM then extracts image-based information such as textural, spatial and frequency-based information to distinguish between normal and glaucoma images. The method provides a new way to identify glaucoma symptoms without determining any geometrical measurement associated with clinical indications of glaucoma.
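The five-region division can be sketched as below; the 45-degree quadrant boundaries and the right-eye nasal/temporal orientation are illustrative assumptions, not the thesis's exact geometry:

```python
import numpy as np

def regional_split(image, cx, cy, disc_r):
    """Divide a retinal image into five regions around the ONH centre
    (cx, cy): the disc itself plus the I, S, N, T quadrants, separated
    by 45-degree diagonals. Which side is nasal vs temporal depends on
    eye laterality; a right eye with the nasal side on the right is
    assumed here for illustration."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(-dy, dx)) % 360   # 0 deg = right, CCW

    regions = {"ONH": r < disc_r}
    outside = r >= disc_r
    regions["S"] = outside & (ang >= 45) & (ang < 135)    # superior (top)
    regions["N"] = outside & ((ang < 45) | (ang >= 315))  # nasal (right)
    regions["I"] = outside & (ang >= 225) & (ang < 315)   # inferior
    regions["T"] = outside & (ang >= 135) & (ang < 225)   # temporal (left)
    return regions

# Per-region feature extraction, e.g. simple intensity statistics:
img = np.random.default_rng(1).random((64, 64))
regions = regional_split(img, cx=32, cy=32, disc_r=10)
features = {name: (img[mask].mean(), img[mask].std())
            for name, mask in regions.items()}
```

The real RIFM extracts richer textural, spatial and frequency features per region; the mean/std pair above only illustrates the per-region feature layout.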
Finally, we have combined clinical indications of glaucoma, including the Cup-to-Disc Ratio (CDR), vasculature shift and neuroretinal rim loss, with the RIFM classification and performed automatic classification between normal and glaucoma images. Since, according to the clinical literature, no single geometrical measurement is a guaranteed sign of glaucoma, combining the RIFM classification results with clinical indications of glaucoma can lead to more accurate classification between normal and glaucoma images. The proposed methods have been tested against retinal image databases of 208 fundus images and 102 Scanning Laser Ophthalmoscope (SLO) images. These databases have been annotated by clinicians around the different anatomical structures associated with glaucoma, and each image has been labelled as healthy or glaucomatous. In the fundus images, the ONH cropped images have resolutions varying from 300 to 900 pixels, whereas in the SLO images the resolution is 341 x 341 pixels. The accuracy of classification between normal and glaucoma images is 94.93% on the fundus images and 98.03% on the SLO images.
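As an illustration of one of the clinical indications used above, a minimal sketch of computing a vertical cup-to-disc ratio from binary segmentation masks; the 0.6 suspicion threshold is a commonly cited rule of thumb, not a value taken from this work:

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    the ratio of the vertical extents of the cup and the disc."""
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_d = disc_rows.max() - disc_rows.min() + 1
    cup_d = cup_rows.max() - cup_rows.min() + 1
    return cup_d / disc_d

# Synthetic masks: disc of radius 20, cup of radius 12.
yy, xx = np.mgrid[:100, :100]
disc = (yy - 50) ** 2 + (xx - 50) ** 2 < 20 ** 2
cup = (yy - 50) ** 2 + (xx - 50) ** 2 < 12 ** 2

cdr = vertical_cdr(disc, cup)
# A CDR above roughly 0.6 is often treated as suspicious for glaucoma,
# although, as the text notes, no single measurement is a guaranteed sign.
suspicious = cdr > 0.6
```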
Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review
Glaucoma is a group of eye diseases that have common traits, such as high eye pressure, damage to the Optic Nerve Head and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of Glaucoma include measurement of Intra-Ocular Pressure (IOP) using a Tonometer, Pachymetry and Gonioscopy, which are performed manually by clinicians. These tests are usually followed by an Optic Nerve Head (ONH) appearance examination for the confirmed diagnosis of Glaucoma. The diagnosis requires regular monitoring, which is costly and time-consuming. The accuracy and reliability of diagnosis are also limited by the domain knowledge of different ophthalmologists. Therefore, automatic diagnosis of Glaucoma has attracted a lot of attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of Glaucoma. We have conducted a critical evaluation of the existing automatic extraction methods based on features including the Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), Neuroretinal Rim Notching and Vasculature Shift, which adds value for efficient feature extraction related to Glaucoma diagnosis. © 2013 Elsevier Ltd
Retinal area detector from Scanning Laser Ophthalmoscope (SLO) images for diagnosing retinal diseases
© 2014 IEEE. Scanning laser ophthalmoscopes (SLOs) can be used for early detection of retinal diseases. With the advent of the latest screening technology, the advantage of using an SLO is its wide field of view, which can image a large part of the retina for better diagnosis of retinal diseases. On the other hand, during the imaging process, artefacts such as eyelashes and eyelids are imaged along with the retinal area. This raises the challenge of how to exclude these artefacts. In this paper, we propose a novel approach to automatically extract the true retinal area from an SLO image based on image processing and machine learning approaches. To reduce the complexity of the image processing tasks and provide a convenient primitive image pattern, we group pixels into different regions, called superpixels, based on regional size and compactness. The framework then calculates image-based features reflecting textural and structural information and classifies between retinal area and artefacts. The experimental evaluation has shown good performance, with an overall accuracy of 92%.
An automated cloud-based big data analytics platform for customer insights
Product reviews have a significant influence on strategic decisions, both for businesses deciding what to produce and for customers deciding what to buy. However, with the availability of large amounts of online information, manual analysis of reviews is costly and time-consuming, as well as being subjective and prone to error. In this work, we present an automated, scalable, cloud-based system to harness large volumes of customer reviews of products for customer insights, through a data pipeline covering data acquisition, analysis and visualisation in an efficient way. The experimental evaluation has shown that the proposed system achieves good performance in terms of accuracy and computing time.
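A toy end-to-end sketch of such a pipeline (acquisition, then analysis, then visualisation) follows; the keyword lexicon and sample reviews are invented for illustration and stand in for the system's actual cloud components and models:

```python
from collections import Counter

# Illustrative lexicon; a production system would use a trained
# sentiment model rather than keyword matching.
POSITIVE = {"great", "excellent", "love", "good", "fast"}
NEGATIVE = {"bad", "poor", "slow", "broken", "hate"}

def acquire():
    """Acquisition stage: stands in for scraping/streaming review data."""
    return [
        "great phone, love the camera",
        "battery is bad and support was slow",
        "good value, fast delivery",
    ]

def analyse(reviews):
    """Analysis stage: score each review by lexicon hits."""
    labels = []
    for text in reviews:
        words = set(text.lower().replace(",", " ").split())
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        labels.append("positive" if score > 0 else
                      "negative" if score < 0 else "neutral")
    return labels

def visualise(labels):
    """Visualisation stage: aggregate counts for a dashboard/chart."""
    return Counter(labels)

summary = visualise(analyse(acquire()))
```

In the actual system each stage would be a distributed cloud service; the point here is only the acquisition-analysis-visualisation pipeline shape.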
Ethics and biomedical engineering for well-being: a co-creation study of remote services for monitoring and support
The well-being of students and staff directly affects their output and efficiency. This study presents the results of two focus groups conducted in 2022 within a two-phase project led by the Applied Biomedical and Signal Processing Intelligent e-Health Lab, School of Engineering at the University of Warwick, and British Telecom within “The Connected Campus: University of Warwick case study” program. The first phase, involving staff and students at the University of Warwick, aimed to collect preliminary information for the subsequent second phase about the feasibility of using Artificial Intelligence and the Internet of Things for well-being support on campus. The main findings of this first phase are technological suggestions from real users: the users helped design the scenarios and select the key enabling technologies that they considered the most relevant, useful and acceptable for supporting and improving well-being on campus. These results will inform future services that design and implement technologies for monitoring and supporting well-being, such as hybrid, minimal and even intrusive (implantable) solutions. The user-driven co-design of such services, leveraging wearable devices and Artificial Intelligence, will increase their acceptability to users.
N-Beats as an EHG signal forecasting method for labour prediction in full term pregnancy
The early prediction of the onset of labour is critical for avoiding the risk of death due to delayed delivery. Low-income countries often struggle to deliver timely care to pregnant women due to a lack of infrastructure and healthcare facilities, resulting in pregnancy complications and, eventually, death. In this regard, several artificial-intelligence-based methods have been proposed based on the detection of contractions using electrohysterogram (EHG) signals. However, forecasting pregnancy contractions from real-time EHG signals is a challenging task. This study proposes a novel model based on neural basis expansion analysis for interpretable time series (N-BEATS), which predicts labour based on EHG forecasting and contraction classification over a given time horizon. The publicly available TPEHG database of PhysioBank was used to train and test the model, where signals from full-term pregnant women, recorded after 26 weeks of gestation, were collected. For these signals, the 30 classification parameters most commonly used in the literature were calculated, and principal component analysis (PCA) was used to select the 15 most representative parameters (all domains combined). The results show that N-BEATS forecasting can forecast EHG signals after training for only a few iterations. Similarly, the duration of the forecast signal is determined by the length of the recordings. We then deployed XGBoost, which achieved a classification accuracy of 99%, outperforming the state-of-the-art approaches that use 15 or more classification features.
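The PCA-then-classify step can be sketched as follows on synthetic data standing in for the 30 EHG parameters; scikit-learn's GradientBoostingClassifier is used here as a self-contained stand-in for XGBoost (same boosted-trees idea, different library), and the data and class separation are invented:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for the 30 EHG classification parameters:
# 60 recordings, with class separation injected into a few features.
n, n_features = 60, 30
y = np.repeat([0, 1], n // 2)        # 0 = no imminent labour, 1 = labour
X = rng.normal(size=(n, n_features))
X[y == 1, :5] += 1.5                 # discriminative parameters

# Reduce the 30 parameters to the 15 most representative components,
# as in the paper's PCA step.
pca = PCA(n_components=15)
X15 = pca.fit_transform(X)

# Boosted-trees classifier on the reduced feature set.
clf = GradientBoostingClassifier(random_state=0).fit(X15, y)
train_acc = clf.score(X15, y)
```

On real EHG data one would of course evaluate on a held-out split rather than training accuracy; the sketch only shows the 30-to-15 reduction feeding a boosted classifier.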
Toward a Symbolic AI Approach to the WHO/ACSM Physical Activity Sedentary Behavior Guideline
The World Health Organization and the American College of Sports Medicine have released guidelines on physical activity and sedentary behavior, as part of an effort to reduce inactivity worldwide. However, to date, there is no computational model that can facilitate the integration of these recommendations into health solutions (e.g., digital coaches). In this paper, we present an operational and machine-readable model that represents and is able to reason about these guidelines. To this end, we adopted a symbolic AI approach that combines two paradigms of research in knowledge representation and reasoning: ontology and rules. Thus, we first present HeLiFit, a domain ontology implemented in OWL, which models the main entities that characterize the definition of physical activity, as defined in the guidance. Then, we describe HeLiFit-Rule, a set of rules implemented in the RDFox Rule language, which can be used to represent and reason with these recommendations in concrete real-world applications. Furthermore, to ensure a high level of syntactic/semantic interoperability across different systems, our framework is also compliant with the FHIR standard. Through motivating scenarios that highlight the need for such an implementation, we finally present an evaluation of our model whose results are encouraging in terms of the value of our solution and provide a basis for future work.
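For a flavour of the rule-based reasoning, here is a toy Python encoding of the WHO 2020 adult aerobic recommendation (150–300 min of moderate activity per week, 75–150 min of vigorous activity, or an equivalent combination with vigorous minutes counting double); this is an illustrative sketch only, not the HeLiFit-Rule/RDFox implementation, and the user data is invented:

```python
def meets_aerobic_guideline(moderate_min, vigorous_min):
    """Simplified lower-bound check of the WHO adult aerobic
    recommendation: moderate minutes plus double-counted vigorous
    minutes must reach 150 per week. A toy encoding for illustration."""
    return moderate_min + 2 * vigorous_min >= 150

# Hypothetical weekly activity logs:
users = {
    "meets_by_moderate": (160, 0),
    "meets_by_vigorous": (0, 80),
    "meets_by_mix": (90, 40),
    "does_not_meet": (60, 20),
}
results = {name: meets_aerobic_guideline(m, v)
           for name, (m, v) in users.items()}
```

A rule engine such as RDFox would express this declaratively over ontology individuals rather than as a Python predicate, but the logical content of the recommendation is the same.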
Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images
Glaucoma is one of the leading causes of blindness. There is no cure for glaucoma, but detection at its earliest stage and subsequent treatment can help patients avoid blindness. Currently, optic disc and retinal imaging facilitate glaucoma detection, but this method still requires manual post-imaging modifications that are time-consuming and do not fully remove subjectivity from image assessment. Therefore, it is necessary to automate this process. In this work, we first propose a novel computer-aided approach for automatic glaucoma detection based on a Regional Image Features Model (RIFM), which can automatically classify normal and glaucoma images on the basis of regional information. Unlike all existing methods, our approach can extract both geometric properties (e.g. morphometric properties) and non-geometric properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three major contributions: automatic localisation of the optic disc, automatic segmentation of the disc, and classification between normal and glaucoma based on the geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning Laser Ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy on fundus images is 94.4%, and the accuracy of detecting suspected glaucoma in SLO images is 93.9%.
Advances in Artificial Intelligence, Machine Learning and Deep Learning Applications
Recent advances in the field of artificial intelligence (AI) have been pivotal in enhancing the effectiveness and efficiency of many systems across all fields of knowledge, including medical diagnosis [...]